Zeroth-order algorithms for stochastic distributed nonconvex optimization

Authors

Abstract

In this paper, we consider a stochastic distributed nonconvex optimization problem with the cost function being distributed over n agents, each having access only to zeroth-order (ZO) information of its local cost. This problem has various machine learning applications. As a solution, we propose two distributed ZO algorithms, in which at each iteration each agent samples the local stochastic ZO oracle at two points with a time-varying smoothing parameter. We show that the proposed algorithms achieve the linear speedup convergence rate O(√(p/(nT))) for smooth cost functions under state-dependent variance assumptions that are more general than the commonly used bounded variance and Lipschitz assumptions, and the rate O(p/(nT)) when the global cost function additionally satisfies the Polyak–Łojasiewicz (P–Ł) condition, where p and T are the dimension of the decision variable and the total number of iterations, respectively. To the best of our knowledge, this is the first linear speedup result for distributed ZO algorithms; it consequently enables systematic processing performance improvements by adding more agents. We also show that the proposed algorithms converge linearly under the relatively bounded second moment assumption when the global cost function satisfies the P–Ł condition. We demonstrate through numerical experiments the efficiency of our algorithms at generating adversarial examples from deep neural networks, in comparison with baseline and recently proposed centralized and distributed ZO algorithms.
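
As a rough illustration of the two-point sampling described above (not the paper's full algorithm, which also involves information exchange between agents), the sketch below implements a standard two-point ZO gradient estimator with a time-varying smoothing parameter; the function names, the schedule delta0/sqrt(t+1), and the plain averaging across agents are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def two_point_zo_gradient(f, x, t, delta0=1.0, rng=None):
    """Two-point zeroth-order gradient estimate with a time-varying
    smoothing parameter (the schedule delta0/sqrt(t+1) is an assumption).

    f : stochastic zeroth-order oracle, f(x) -> noisy scalar cost
    x : decision variable, shape (p,)
    t : iteration counter driving the smoothing parameter
    """
    rng = np.random.default_rng() if rng is None else rng
    delta_t = delta0 / np.sqrt(t + 1.0)        # time-varying smoothing parameter
    u = rng.standard_normal(x.size)            # random search direction
    # Sample the local oracle at two perturbed points, as in the abstract.
    return (f(x + delta_t * u) - f(x - delta_t * u)) / (2.0 * delta_t) * u

# Toy run: n agents, each with a noisy local quadratic; averaging the
# local estimates stands in for the algorithms' network mixing step.
if __name__ == "__main__":
    p, n, T = 10, 4, 500
    centers = [np.full(p, float(i)) for i in range(n)]
    oracles = [lambda x, c=c: np.sum((x - c) ** 2) + 0.01 * np.random.randn()
               for c in centers]
    x = np.zeros(p)
    for t in range(T):
        g = np.mean([two_point_zo_gradient(f, x, t) for f in oracles], axis=0)
        x -= 0.05 * g
```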

Similar Articles

Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method pos...
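
The distinctive device in the RSG method, as sketched in this abstract, is returning a randomly selected iterate rather than the last one, which yields stationarity guarantees in expectation for nonconvex problems. A minimal sketch, assuming uniform selection (the paper prescribes a specific probability mass over iterates):

```python
import numpy as np

def rsg(stoch_grad, x0, T, step=0.01, rng=None):
    """Randomized stochastic gradient (sketch): run T SGD steps, then
    return one iterate chosen at random rather than the last one."""
    rng = np.random.default_rng() if rng is None else rng
    x, iterates = x0.copy(), [x0.copy()]
    for _ in range(T):
        x = x - step * stoch_grad(x)           # ordinary stochastic gradient step
        iterates.append(x.copy())
    return iterates[rng.integers(len(iterates))]   # randomized output iterate
```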

Stochastic Zeroth-order Optimization in High Dimensions

We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only loga...
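
To make the Lasso gradient estimate concrete: one can query the function at random perturbations and solve an ℓ1-regularized least-squares problem for the gradient, exploiting the assumed sparsity so the query cost depends only weakly on the ambient dimension. A minimal sketch, with illustrative parameter choices and ISTA standing in for whatever solver the paper actually uses:

```python
import numpy as np

def lasso_zo_gradient(f, x, m=50, delta=1e-2, lam=0.1, iters=200, rng=None):
    """Sparse (Lasso) gradient estimate from m zeroth-order queries.

    Solves  min_g (1/2m)||Z g - y||^2 + lam ||g||_1  by ISTA, where the
    rows of Z are random directions and y holds finite differences.
    All parameter choices here (m, delta, lam) are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = x.size
    Z = rng.standard_normal((m, p))
    y = np.array([(f(x + delta * z) - f(x)) / delta for z in Z])
    g = np.zeros(p)
    L = np.linalg.norm(Z, 2) ** 2 / m          # Lipschitz constant of the quadratic
    for _ in range(iters):                     # ISTA: gradient step + soft-threshold
        grad = Z.T @ (Z @ g - y) / m
        g = g - grad / L
        g = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return g
```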

Zeroth Order Nonconvex Multi-Agent Optimization over Networks

In this paper we consider distributed optimization problems over a multi-agent network, where each agent can only partially evaluate the objective function and is allowed to exchange messages with its immediate neighbors. Unlike existing works on distributed optimization, our focus is on optimizing a class of difficult non-convex problems, and under the challenging setti...
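
For context on the neighbor-message structure: in consensus-based schemes of this kind, each agent mixes its variable with its immediate neighbors' (via a mixing matrix supported on the network) and then takes a local descent step. A minimal sketch of one such update, with all names illustrative:

```python
import numpy as np

def decentralized_step(X, local_grads, W, step=0.05):
    """One consensus + local-descent update for n agents (sketch).

    X           : (n, p) stacked local decision variables
    local_grads : (n, p) local gradient (or ZO) estimates
    W           : (n, n) doubly stochastic mixing matrix; W[i, j] > 0 only
                  if agents i and j are neighbors in the network
    """
    return W @ X - step * local_grads   # average with neighbors, then descend
```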

On Zeroth-Order Stochastic Convex Optimization via Random Walks

We propose a method for zeroth order stochastic convex optimization that attains the suboptimality rate of Õ(n^7 T^{-1/2}) after T queries for a convex bounded function f : R^n → R. The method is based on a random walk (the Ball Walk) on the epigraph of the function. The randomized approach circumvents the problem of gradient estimation, and appears to be less sensitive to noisy function evaluations c...
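
A minimal sketch of a Ball Walk step on the epigraph, assuming the simplest variant: propose a uniform point in a small ball around the current (x, t) pair and accept it only if it stays inside {(x, t) : f(x) ≤ t ≤ level}. The radius, step count, and fixed level are illustrative; the actual method's schedule and acceptance rule differ.

```python
import numpy as np

def ball_walk_epigraph(f, x0, level, steps=1000, radius=0.1, rng=None):
    """Ball Walk on the epigraph slice {(x, t): f(x) <= t <= level} (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    p = x0.size
    z = np.append(x0, level)                   # state (x, t) lives in R^{p+1}
    best_x, best_t = x0.copy(), level
    for _ in range(steps):
        u = rng.standard_normal(p + 1)         # uniform proposal in a ball:
        u *= radius * rng.random() ** (1.0 / (p + 1)) / np.linalg.norm(u)
        cand = z + u
        x_c, t_c = cand[:p], cand[p]
        if f(x_c) <= t_c <= level:             # accept only inside the epigraph
            z = cand
            if t_c < best_t:
                best_x, best_t = x_c.copy(), t_c
    return best_x, best_t
```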

A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order

Asynchronous parallel optimization has achieved substantial success and received extensive attention recently. One of the core theoretical questions is how much speedup (or benefit) asynchronous parallelization can bring. This paper provides a comprehensive and generic analysis to study the speedup property for a broad range of asynchronous parallel stochastic algorithms, from the zeroth order to the...

Journal

Journal title: Automatica

Year: 2022

ISSN: 1873-2836, 0005-1098

DOI: https://doi.org/10.1016/j.automatica.2022.110353